How Oversight Bodies Access Information in the Digital Economy—and Where It Falls Short

For a while now, I have been thinking about (and working on) transparency in digital systems from the users' perspective. My earlier work looked at transparency in the Internet of Things, where I found that users have little to no meaningful technical or legal means to understand where their data goes, who receives it, and what happens to it (see my paper “Rights Out of Sight”). Later, working with users in the context of women’s health apps, I explored how we might co-design mechanisms for transparency and control over data use. That work started in a specific domain, but the lessons travel well to design contexts more broadly; it is written up in my paper on data transparency and control.

What became clear to me over time, though, is something many people already intuitively feel: it is unrealistic to expect individuals to monitor and micromanage their own information all the time. There is simply too much going on. We use too many applications. The underlying infrastructures are too interconnected. The data flows are too opaque. And, bluntly, the effort required rarely matches the convenience of digital services.

So I turned to the institutions that are supposed to protect us: oversight bodies. They are the ones expected to uphold our digital rights and intervene when organisations cause harm. But that raised a rather basic question for me: do oversight bodies themselves actually have meaningful visibility into what organisations are doing in the digital economy?

New laws and governance frameworks keep arriving, covering data, AI, cybersecurity, platforms, and connected devices. But, as I write in my paper:

“Oversight depends on access to relevant information. While more information does not automatically lead to more effective enforcement, meaningful oversight is difficult to achieve without access to the information needed. Indeed, when oversight bodies lack information, their ability to detect harm, monitor compliance, and ultimately enforce rules is necessarily constrained. In such cases, regulatory frameworks may risk losing practical effectiveness, as there are limited means to assess whether regulated actors are in fact meeting their obligations and respecting rights.”

That is the starting point for my new paper, which has been accepted to ACM FAccT 2026. I am still preparing the final version, but I can already share some insights. The study is based on interviews with 21 senior professionals from 19 oversight bodies across the EU, EU member states, and the UK, including regulators, consumer organisations, digital rights NGOs, certification and audit bodies, and policy or standards actors. The paper deliberately takes a broader view of oversight than regulators alone, and it is explicitly cross-technology rather than narrowly AI-focused. In practice, AI is entangled with data, cloud infrastructure, platforms, and connected devices anyway, so real-world oversight problems rarely stay inside one neat box.

What mechanisms do oversight bodies actually use to access information?

One of the main contributions of the paper is to describe oversight as an information pipeline. That sounds slightly more orderly than it is in reality, but it is useful. Oversight bodies first need to detect signals, then decide which of those signals matter, then investigate, and finally sometimes feed information back out as guidance.

Phase 1: market monitoring

Signals come from three places. Some are generated by oversight bodies themselves, through horizon scanning, audits, surveys, scraping, device testing, and other technical monitoring. Those practices exist, but they are still relatively uncommon. Much more often, oversight begins because somebody else noticed something first: journalists, academic researchers, NGOs, complainants, or other authorities. The media, in particular, seems to play a surprisingly strong agenda-setting role. Oversight bodies also receive signals from companies themselves, especially through breach notifications, incident reporting, or when firms proactively seek guidance.

Phase 2: prioritisation

Not every signal becomes a case. Oversight bodies usually triage based on risk, scale, and likely harm. They look for systemic patterns rather than one-off anomalies, and they often prioritise issues affecting many people, vulnerable groups, or sensitive data. This sounds reasonable, but it also means a great deal may never receive scrutiny at all. One participant noted that they engage with only about 30% of reported breaches.
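
To illustrate the kind of triage logic participants described, here is a minimal, purely hypothetical sketch in Python: incoming signals are scored by scale, whether vulnerable groups are affected, and whether sensitive data is involved, and then ranked. The fields, weights, and example signals are my own assumptions for illustration, not any participant's actual prioritisation model.

```python
# Purely illustrative triage sketch: score signals by scale, vulnerability,
# and data sensitivity, then rank them. Weights and fields are assumptions.
from dataclasses import dataclass


@dataclass
class Signal:
    source: str                      # e.g. "complaint", "media report", "breach notification"
    people_affected: int
    affects_vulnerable_group: bool
    involves_sensitive_data: bool


def triage_score(s: Signal) -> float:
    """Higher score = higher priority for investigation."""
    score = min(s.people_affected / 10_000, 10.0)   # scale, capped
    if s.affects_vulnerable_group:
        score += 5.0
    if s.involves_sensitive_data:
        score += 5.0
    return score


signals = [
    Signal("complaint", 120, False, True),
    Signal("media report", 2_000_000, True, True),
    Signal("breach notification", 40_000, False, False),
]
for s in sorted(signals, key=triage_score, reverse=True):
    print(f"{triage_score(s):5.1f}  {s.source}")
```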

Phase 3: case investigation

Here the findings are both unsurprising and a bit depressing. Investigations are still heavily document-led. Oversight bodies often begin with policies, self-assessments, contracts, impact assessments, certification files, and internal reports provided by the organisation under scrutiny. Interviews and meetings are common. On-site inspections, system forensics, and lab-based technical testing do happen, but they are resource-intensive and far less common. In other words, the deepest forms of scrutiny are still the exception rather than the rule.

Guidance

Oversight is not only punitive. Bodies also produce advice, warnings, educational materials, and policy feedback. Sometimes this is a dialogue with firms. Sometimes it is public-facing rights education. Sometimes it is oversight bodies informing one another, or informing policymakers that something harmful is happening even if current law does not yet cleanly address it.

What challenges get in the way?

This is where things start to bite. The paper identifies six recurring challenges across that information pipeline. Together they point to a fairly stark conclusion: oversight bodies often do not have the visibility they would need for robust oversight in a digital economy.

  1. Paper versus practice. Oversight relies heavily on documentation produced by the very organisations being overseen. That creates a structural asymmetry. Documents may be formally complete while still being strategically framed, selective, or detached from what actually happens in practice.

  2. Fragmented oversight in a cross-cutting landscape. Real incidents cut across data protection, AI, platform governance, consumer law, product safety, and cybersecurity, but institutions are still organised in narrower silos. This makes it easy for issues to fall between mandates.

  3. Skills, culture, and capacity constraints. Technology moves quickly; oversight institutions generally do not. Limited staff, limited time, limited technical expertise, and procedural drag all narrow what can be investigated.

  4. Poor visibility into supply chains and cross-border flows. Data, models, software components, and infrastructure move across organisations and jurisdictions in ways that are difficult to trace. Responsibility becomes muddy just where scrutiny most needs it to become crisp.

  5. Information overload and low interpretability. Even when organisations disclose information, they may disclose too much, too opaquely, or without the context needed to interpret it. A 250-page disclosure is not the same thing as oversight.

  6. Insufficient engagement with affected communities. Citizens are often expected to complain, but are rarely meaningfully integrated into oversight. Many harms are invisible to the people experiencing them, especially in data-driven or discriminatory systems.

I found it useful to think of these not as isolated problems but as one recurring condition: persistent information asymmetry.

How do companies play games with oversight bodies?

This was one of the most fascinating parts of the study. Several interviewees described ways in which organisations appear to manage oversight strategically rather than simply comply with it. Sometimes this is about framing. One regulator said firms tend to present “the good side” of their practices and risk telling “their story rather than... the facts.” Another participant noted that organisations may provide only “70% of the truth.”

Sometimes it is about doing the minimum. Discussing self-assessment regimes, one participant pointed to commercial pressure to move quickly (i.e., to put their AI products on the market as soon as possible) and said companies may choose to “do the minimum in the self-assessment instead of doing the maximum.”

Sometimes it is semantic. A consumer organisation described disputes where firms effectively say: “you call that ‘selling’ data, but we call it something else”. That is not merely a disagreement over vocabulary; it is an attempt to shift the regulatory category itself.

Sometimes it is temporal. One interviewee said that “the second they get the whiff of what we’re doing, they do subtle changes that change the facts on the ground.” Another described procedural hurdles used to “slow things down,” while another noted that big companies had understood early that authorities have limited resources and can simply drag cases out.

It all links to power asymmetries. As one participant put it, “Those companies have huge muscles.” That matters because when oversight depends on what firms choose to reveal, and when firms are better resourced than the institutions scrutinising them, delay and partial visibility become strategic assets.

I do not think that every firm is behaving in bad faith all the time. That would be too easy, and probably too neat. But the findings do show that the current arrangement gives organisations, including large technology firms, substantial room to manoeuvre. In some cases, that room looks uncomfortably like game-playing.

What should we do instead?

Before getting into possible paths forward, I should say that I came away from these interviews impressed by the people doing this work. I spoke with very senior professionals across different oversight bodies, and despite high workloads, limited resources, and the constraints described above, they showed real creativity. Many are actively experimenting with new approaches to oversight and continuously looking for ways to improve how oversight works in practice.

Across the interviews, a number of recurring themes emerged as potential ways forward.

Stronger verification beyond paperwork.

If oversight bodies are stuck reading documents written by the companies they oversee, they will remain structurally dependent on curated accounts. The study points toward more scraping, technical audits, system testing, lab evaluation, and data-driven surveys. In short: less faith in paper, more attempts to observe practice.

Some concrete practices that could support this shift include:

  • investing in technical investigative capacity within oversight bodies

  • developing shared monitoring infrastructures (e.g., scraping tools, device testing labs, measurement platforms)

  • conducting independent empirical studies of digital services

  • expanding technical audit powers and access to systems

  • collaborating with academic researchers and technical experts to analyse systems in practice

These approaches aim to reduce dependence on company-provided documentation and enable oversight bodies to observe how systems actually operate.
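
To make the shift from paper to practice a little more concrete, here is a minimal sketch (in Python, standard library only) of the simplest kind of monitoring tooling of this sort: a script that fetches published policy pages on each run and flags those that changed since the last run. The URLs and file name are placeholders, and real monitoring infrastructures would be far more sophisticated; the point is only to illustrate observing what organisations publish rather than relying solely on what they submit.

```python
# Minimal sketch of automated document monitoring: fetch published policy
# pages, hash their contents, and flag any that changed since the last run.
# The URLs and file paths below are placeholders, not real monitored targets.
import hashlib
import json
import urllib.request
from datetime import datetime, timezone
from pathlib import Path

POLICY_URLS = [
    "https://example.com/privacy-policy",    # hypothetical monitored pages
    "https://example.org/terms-of-service",
]
STATE_FILE = Path("policy_hashes.json")      # hashes from the previous run


def fetch_hash(url: str) -> str:
    """Download a page and return a SHA-256 hash of its raw bytes."""
    with urllib.request.urlopen(url, timeout=30) as response:
        return hashlib.sha256(response.read()).hexdigest()


def check_for_changes() -> None:
    previous = json.loads(STATE_FILE.read_text()) if STATE_FILE.exists() else {}
    current = {}
    for url in POLICY_URLS:
        try:
            current[url] = fetch_hash(url)
        except OSError as exc:
            print(f"could not fetch {url}: {exc}")
            continue
        if url in previous and previous[url] != current[url]:
            # A changed hash only signals *that* a document changed;
            # an investigator still has to review *what* changed.
            print(f"{datetime.now(timezone.utc).isoformat()} CHANGED: {url}")
    STATE_FILE.write_text(json.dumps(current, indent=2))


if __name__ == "__main__":
    check_for_changes()
```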

Better coordination across the oversight ecosystem.

Since digital harms are cross-technology and cross-jurisdictional, oversight cannot remain neatly partitioned either. Regulators, auditors, certification bodies, NGOs, consumer organisations, journalists, and researchers all see different parts of the elephant. The challenge is to build ways of sharing signals, expertise, and priorities without drowning everyone in process.

Participants described several practices already emerging in this direction:

  • cross-regulator cooperation and joint task forces across regulatory mandates

  • structured information-sharing networks between oversight actors

  • collaboration with journalists, researchers, and civil society organisations who often surface early signals of harm

  • coordination across EU and national oversight bodies

  • developing shared investigative priorities and risk frameworks

Rather than trying to centralise oversight into a single authority, the aim is to strengthen the networked ecosystem of oversight so that signals about emerging harms can circulate more effectively.

Stronger participatory and intermediary-based oversight.

Individuals cannot realistically monitor digital infrastructures on their own, but that does not mean they should disappear from the picture. The more plausible route is through intermediaries: consumer groups, NGOs, human-rights organisations, collective complaints, and community-based monitoring. These can act as connective tissue between lived experience and formal oversight.

Interviewees highlighted several practices that could strengthen this intermediary role:

  • enabling collective complaint mechanisms and representative actions

  • supporting civil society watchdog organisations that investigate digital harms

  • creating structured channels for community reporting and whistleblowing

  • funding public interest research and monitoring initiatives

  • developing participatory oversight processes that integrate user perspectives into investigations

These approaches recognise that citizens may not be able to monitor systems individually, but collective and intermediary forms of oversight can still surface harms that institutions would otherwise miss.

Final note

The issue is not merely that users lack transparency. It is that even the bodies tasked with protecting users often lack the visibility, interpretive capacity, and institutional position needed to see clearly into the systems they are meant to oversee. If that is true, then the question is not just whether organisations comply with the law. It is whether our oversight arrangements are currently capable of knowing when they do not.

That seems like a useful question for the next phase of digital governance.

Source paper: to be published at ACM FAccT